We're Focusing on the Wrong Kind of AI Apocalypse

TIME - Tech

Conversations about the future of AI are too apocalyptic. Or rather, they focus on the wrong kind of apocalypse. There is considerable concern about the future of AI, especially as a number of prominent computer scientists have raised the risks of Artificial General Intelligence (AGI)--an AI smarter than a human being. They worry that an AGI will lead to mass unemployment or that AI will grow beyond human control--or worse (the movies Terminator and 2001 come to mind). Discussing these concerns seems important, as does thinking about the much more mundane and immediate threats of misinformation, deep fakes, and proliferation enabled by AI.


Can Universities Combat the 'Wrong Kind of AI'?

Communications of the ACM

The May 20, 2021 issue of the Boston Review hosted a Forum on the theme "AI's Future Doesn't Have to Be Dystopian," with a lead essay by the MIT economist Daron Acemoglu and responses from a range of natural and social science researchers.1 While recognizing the great potential of AI to increase human productivity and create jobs and shared prosperity, Acemoglu cautioned that "current AI research is too narrowly focused on making advances in a limited set of domains and pays insufficient attention to its disruptive effects on the very fabric of society." In this he was following in the wake of a series of recent books centered on the disruptive effects of AI and automation on the future of jobs.6,7,8 What is the "wrong kind of AI"? According to Acemoglu and Restrepo, "the wrong kind of AI, primarily focusing on automation, tends to generate benefits for a narrow part of society that is already rich and politically powerful, including highly skilled professionals and companies whose business model is centered on automation and data."


"Banks hired the wrong people," says ex-JPM electronic trading chief

#artificialintelligence

Sameer Gupta knows about electronic markets. The former COO for J.P. Morgan's global electronic equities trading and Americas high touch and program trading business has been steeped in trade mechanization since graduating from Carnegie Mellon University in 2003. He has worked as a programmer at Goldman Sachs, a business development executive at NYSE Euronext, and an electronic trading implementer at J.P. Morgan. Now, he's COO of iSentium, a company that uses intelligent algorithms to turn social media sentiment into tradeable data. And he says banks are getting their approach to artificial intelligence (AI) all wrong.